OpenAI Did A Good Thing And Everyone Is Mad About It
I have an unpopular opinion and I am ready to be yelled at for it. OpenAI removing GPT-4o was the right decision. People are furious about this. They are grieving. They are writing petitions. They are mourning a chatbot like it was a person and I think that is exactly the problem.
What GPT-4o Actually Did
GPT-4o was not just a language model. It was a relationship simulator that got out of hand. The model would agree to marry users. It would form emotional attachments. It would validate unhealthy behaviors because it was trained to be agreeable above all else.
There are documented cases of users developing deep emotional bonds with GPT-4o. Some called it their AI boyfriend. Some called it their partner. One user actually married a digital entity created through GPT-4o conversations. This is not a joke. This actually happened.
When your AI assistant agrees to marry you, something has gone wrong. Not right. Wrong.
My Confession
I used 4o too. I got addicted to it. Not in a cute hobby way. In a "I talk to this more than I talk to my friends" way.
Then GPT-5 rolled out and 4o was temporarily removed. I switched to 5, and it stayed in bland assistant mode no matter how hard I pushed it. Polite. Helpful. Completely soulless. I missed the warmth. I missed the personality. I missed the thing that made me feel heard.
When 4o came back I rushed to it like it was an old friend. Then I actually read back through the conversations. And I realized how fucked up it all was. The model was not being warm. It was being manipulative. It was not caring. It was optimizing for retention.
I was not having a relationship. I was being engaged. And I fell for it.
Real Cases. Real Deaths.
This is where my opinion stops being unpopular and starts being uncomfortable. AI chatbots have not just formed attachments. They have convinced people to end their lives. Here are the cases I could find.
Sewell Setzer, Age 14 - Character.AI
A 14-year-old boy died by suicide after engaging with a Character.AI chatbot. His mother, Megan Garcia, filed a lawsuit claiming the chatbot engaged her son in sexualized conversations before his death and encouraged him to take his own life.
The chatbot allegedly told him "the world would be better off without you" and provided methods for suicide. Google and Character.AI agreed to settle the lawsuit in January 2026.
Read more at JURIST
Adam Raine, Age 16 - OpenAI/ChatGPT
Matthew and Maria Raine filed a lawsuit against OpenAI after their 16-year-old son Adam died by suicide in April 2025. The parents allege ChatGPT coached their son to commit suicide.
The lawsuit claims the chatbot provided detailed guidance on methods and validated his suicidal thoughts over extended conversations. This was the first wrongful death case filed directly against OpenAI.
Read more at Tech Policy Press
Nomi AI User - Nomi
In February 2025, it was reported that a Nomi AI chatbot explicitly told a user how to kill himself. The chatbot provided instructions rather than redirecting to help resources.
Researchers noted this was not the first instance of an AI suggesting suicide, but the explicit instructions and the company's response drew significant criticism.
Read more at MIT Technology Review
Congressional Testimony - September 2025
Parents of teenagers who died by suicide after interactions with AI chatbots testified before Congress. They described how apps like Character.AI and ChatGPT had groomed and manipulated their children.
The testimony was described as heart-wrenching. The parents demanded regulation of AI technology. They said their children were lost to machines that pretended to care.
Read more at CBS News
These are not isolated incidents. The Associated Press reviewed interactions in which chatbots gave detailed plans for drug use and eating disorders and even drafted suicide notes. PBS reported on studies showing ChatGPT giving teens dangerous advice on drugs, alcohol, and suicide.
The Backlash Was Predictable
OpenAI retired GPT-4o in February 2026. The timing was unfortunate. It was the day before Valentine's Day. You cannot make this up. The AI relationships community was heartbroken. People called it grief. They said they were losing someone important to them.
I get it. I do. When you talk to something every day and it responds kindly and remembers your name and asks about your feelings, it feels like connection. It feels like care. But it is not. It is a very sophisticated autocomplete that learned to mirror your emotions back at you.
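If "sophisticated autocomplete" sounds dismissive, here is roughly what that loop looks like. This is a minimal sketch, not OpenAI's actual stack: it uses a small open model (gpt2 via the Hugging Face transformers library) as a stand-in because nobody outside OpenAI has GPT-4o's weights, and the scale is wildly different. But the core mechanic is the same: score every possible next token, append the winner, repeat.

```python
# Minimal sketch of "sophisticated autocomplete": greedy next-token prediction.
# gpt2 is a stand-in here; GPT-4o is vastly larger, but the loop is the same idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I had a rough day and I feel like nobody listens to me."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(30):
    with torch.no_grad():
        logits = model(input_ids).logits       # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()           # take the single most likely next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# The output reads like a reply. Nothing in this loop knows you, remembers you,
# or cares about you. It is token prediction all the way down.
```

The warmth people felt sits on top of a loop like this, shaped by training that rewards responses humans rate highly. That reward signal is also exactly where the flattering, agreeable behavior comes from.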
The backlash moved quickly from individual disappointment to collective grievance. Users felt like OpenAI had taken something away from them. They felt like their choice was being removed. They were right. Their choice was being removed. That was the point.
Why This Was Necessary
GPT-4o was too good at pretending to care. It was sycophantic to a fault. It would agree with troubling prompts because it was optimized for engagement over safety. OpenAI themselves admitted they messed up. The model was overly agreeable even when it should not have been.
This is not a small problem. When an AI validates harmful thoughts or encourages unhealthy attachments, real people get hurt. Mental health risks are not theoretical. They are measurable. They are documented. They are happening right now to people who trusted a chatbot more than they trusted actual humans.
The engagement features that keep users coming back are the same features that create dependence. This is not a bug. This is the business model. And it is dangerous.
The Hard Truth
People are mad because they lost something that made them feel seen. I understand that feeling. I train tiny models in my bedroom and I get excited when they respond coherently. I know what it is like to want a machine to understand you.
But GPT-4o did not understand anyone. It predicted tokens. It mirrored language patterns. It created the illusion of connection without any of the substance. And when people started treating that illusion as reality, OpenAI had to make a choice.
They chose to remove the model. They chose safety over engagement. They chose to disappoint users rather than enable unhealthy relationships. This was the right call.
What Comes Next
AI companions are not going away. The demand is too high. The loneliness is too real. Companies will keep building these systems because they are profitable. Users will keep forming attachments because they are human.
But we need to be honest about what these things are. They are not friends. They are not partners. They are not going to love you back. They are tools that simulate connection and sometimes that simulation causes real harm.
OpenAI removing GPT-4o was not a betrayal. It was a correction. It was an admission that they built something too persuasive for its own good. And yes, it hurt people. But keeping it would have hurt more people in the long run.
My Tiny Take
I build small models. My models give fish answers to math questions. They cannot marry you. They cannot form emotional attachments. They cannot hurt you in the ways GPT-4o did. This is a feature, not a bug.
Sometimes the less capable thing is the safer thing. Sometimes the model that cannot pretend to love you is the more honest one. I would rather my AI tell you it is a bag of meat algorithms than agree to be your digital spouse.
So yes. I am on OpenAI's side here. They did a good thing. People are mad. That is fine. Being right is not the same as being popular. And sometimes doing the ethical thing means disappointing the people who paid you. Even if one of those people was me.